

Look-ahead Meta Learning for Continual Learning

Neural Information Processing Systems

The continual learning problem involves training models with limited capacity to perform well on a set of an unknown number of sequentially arriving tasks. While meta-learning shows great potential for reducing interference between old and new tasks, the current training procedures tend to be either slow or offline, and sensitive to many hyper-parameters. In this work, we propose Look-ahead MAML (La-MAML), a fast optimisation-based meta-learning algorithm for online-continual learning, aided by a small episodic memory. By incorporating the modulation of per-parameter learning rates in our meta-learning update, our approach also allows us to draw connections to and exploit prior work on hypergradients and meta-descent. This provides a more flexible and efficient way to mitigate catastrophic forgetting compared to conventional prior-based methods. La-MAML achieves performance superior to other replay-based, prior-based and meta-learning based approaches for continual learning on real-world visual classification benchmarks.
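The per-parameter learning-rate modulation described in the abstract can be sketched on a toy problem. The snippet below is an illustrative simplification, not the paper's implementation: it stands in for the new task and the replay buffer with per-coordinate quadratic losses (all names, losses, and constants here are assumptions), takes one inner look-ahead SGD step, and then updates both the learning rates `alpha` (via the hypergradient of the meta-loss) and the weights (using the clipped learned rates), mirroring the structure of the La-MAML update.

```python
import numpy as np

# Toy per-coordinate quadratic tasks: the "new" task pulls the weights
# toward `a`, the replayed "old" task pulls them toward `b`.
# (Illustrative assumptions, not the paper's actual losses.)
a = np.array([1.0, -2.0])   # optimum of the incoming task
b = np.array([0.5,  0.5])   # optimum of the replayed task

def grad_new(w):
    # Gradient of the inner-loop loss 0.5 * ||w - a||^2 on new-task data.
    return w - a

def grad_meta(w):
    # Gradient of the meta-loss on old + new data combined.
    return (w - a) + (w - b)

w = np.zeros(2)             # model parameters
alpha = np.full(2, 0.1)     # learnable per-parameter learning rates
eta = 0.05                  # outer-loop step size for alpha

for _ in range(200):
    g_inner = grad_new(w)
    w_fast = w - alpha * g_inner            # look-ahead (inner) SGD step
    g_meta = grad_meta(w_fast)
    # Hypergradient of the meta-loss w.r.t. alpha via the chain rule
    # through w_fast; alpha grows where inner and meta gradients align.
    alpha -= eta * g_meta * (-g_inner)
    # Meta-update of the weights, using the clipped learned rates.
    w -= np.maximum(alpha, 0.0) * g_meta

adapted = w - alpha * grad_new(w)
print(adapted)              # approaches the joint optimum (a + b) / 2
```

The key property this sketch illustrates is that `alpha` increases for coordinates where the new-task gradient and the meta-gradient agree (transfer) and shrinks where they conflict (interference), which is the mechanism the abstract connects to hypergradients and meta-descent.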


Review for NeurIPS paper: Look-ahead Meta Learning for Continual Learning

Neural Information Processing Systems

Weaknesses: I found some issues with the experiments, which I list in the following. Line 215 states that the experiments refer to "task incremental settings". This term has a specific meaning in the CL literature [3,4]: it usually means "multi-head", i.e. task labels are given at inference time. I understand that this is the setting featured in Section 5.2. Recent literature [1, 2, 5, 6] argues that this setting is trivial and that the Single-head/Class-Incremental setting (i.e. no task labels given at inference time) is more challenging. Providing Class-IL results could therefore be of great help in understanding how La-MAML performs in a more challenging setting.


Review for NeurIPS paper: Look-ahead Meta Learning for Continual Learning

Neural Information Processing Systems

The authors propose a simple but effective modification of MAML to adapt it to the continual setting. The analysis is sound and the experiments are convincing. Finally, the rebuttal clarified the questions and concerns raised by the reviewers. This is a solid, timely contribution of interest to the community.



Reproducibility Report: La-MAML: Look-ahead Meta Learning for Continual Learning

Joseph, Joel, Gu, Alex

arXiv.org Artificial Intelligence

The Continual Learning (CL) problem involves performing well on a sequence of tasks under limited compute. Current algorithms in the domain are either slow, offline, or sensitive to hyper-parameters. La-MAML, an optimization-based meta-learning algorithm, claims to outperform other replay-based, prior-based, and meta-learning-based approaches. Following the MER paper [1], the metrics used to measure performance in continual learning are Retained Accuracy (RA) and Backward Transfer-Interference (BTI). La-MAML claims to achieve better values of these metrics than the state of the art in the domain. This is the main claim of the paper, which we verify in this report.
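The RA and BTI numbers discussed above can be computed from the matrix of task accuracies logged during training. The sketch below uses toy numbers and a GEM/MER-style definition of the metrics (an assumption on our part; the papers' exact definitions may differ in sign convention or averaging):

```python
import numpy as np

# acc[i, j] = test accuracy on task j after finishing training on task i.
# (Toy numbers for illustration; real values come from an experiment run.)
acc = np.array([
    [0.90, 0.10, 0.10],
    [0.80, 0.88, 0.10],
    [0.75, 0.82, 0.91],
])

T = acc.shape[0]

# Retained Accuracy: mean accuracy over all tasks after the final task.
ra = acc[-1].mean()

# Backward Transfer-Interference: mean change between each task's accuracy
# right after it was learned and its accuracy at the end of training
# (negative values indicate forgetting).
bti = np.mean([acc[-1, j] - acc[j, j] for j in range(T - 1)])

print(f"RA  = {ra:.3f}")   # 0.827 on this toy matrix
print(f"BTI = {bti:.3f}")  # -0.105 on this toy matrix
```

Higher RA is better; a BTI closer to zero (or positive) indicates less forgetting of earlier tasks.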